
    Slight-Delay Shaped Variable Bit Rate (SD-SVBR) Technique for Video Transmission

    The aim of this thesis is to present a new shaped Variable Bit Rate (VBR) technique for video transmission, which plays a crucial role in delivering video traffic over the Internet. Video media applications have surged over the Internet, and video traffic is typically highly bursty, which leads to Internet bandwidth fluctuation. The new shaping algorithm, referred to as Slight-Delay Shaped Variable Bit Rate (SD-SVBR), is aimed at controlling the video rate for video application transmission. It is designed based on the Shaped VBR (SVBR) algorithm and was implemented in Network Simulator 2 (ns-2). The SVBR algorithm is devised for real-time video applications, but it has several limitations and weaknesses due to its embedded estimation and prediction processes: unwanted sharp decreases in data rate, buffer overflow, persistently low data rates, and cyclical negative fluctuation. The new algorithm is capable of producing a high data rate and, at the same time, a video sequence with better quantization parameter (QP) stability. In addition, the data rate is shaped efficiently to prevent unwanted sharp increments or decrements and to avoid buffer overflow. To achieve this aim, SD-SVBR employs three strategies: processing the next Group of Pictures (GoP) in the video sequence to obtain a QP-to-data-rate list, dimensioning the data rate for higher utilization of the leaky bucket, and applying a QP smoothing method that carefully measures the effect of following the previous QP value. However, the algorithm has to be combined with a network feedback algorithm to produce better overall video rate control. A combination of several video clips with varied video rates was used to evaluate SD-SVBR performance. The results showed that SD-SVBR attains an impressive overall Peak Signal-to-Noise Ratio (PSNR). In addition, in almost all cases it achieves a high video rate without buffer overflow, utilizes the buffer well, and, interestingly, still obtains smoother QP fluctuation.
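The leaky-bucket dimensioning that SD-SVBR relies on can be illustrated with a minimal sketch; the class and parameter names here are illustrative, not taken from the thesis:

```python
class LeakyBucket:
    """Leaky bucket used to police a video sender's rate: frames fill the
    bucket, which drains at a constant rate; overflow means the encoder
    must cut its rate (e.g. by raising QP)."""

    def __init__(self, drain_rate_bps, capacity_bits):
        self.drain_rate = drain_rate_bps   # constant output (drain) rate
        self.capacity = capacity_bits      # buffer size in bits
        self.level = 0.0                   # current occupancy in bits
        self.last_t = 0.0

    def offer(self, frame_bits, t):
        """Try to enqueue a frame at time t; return False on overflow."""
        # Drain the bucket for the time elapsed since the last frame.
        self.level = max(0.0, self.level - self.drain_rate * (t - self.last_t))
        self.last_t = t
        if self.level + frame_bits > self.capacity:
            return False                   # would overflow: rate must drop
        self.level += frame_bits
        return True
```

A rate-shaping algorithm such as SD-SVBR would pick, per GoP, the QP whose predicted frame sizes keep `offer` succeeding while leaving the bucket as full as possible.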

    Cache replacement positions in information-centric network

    Information dissemination, as the sole functionality driving the current Internet trend, has been of keen interest for its manageability. Information-Centric Networking (ICN) is proposed as a new paradigm shift to mitigate the predicted traffic growth of the current Internet. However, caching, an advantageous building block of ICN, faces the challenges of content placement, content replacement, and eviction. The current practice of ICN caching has given rise to the problems of content redundancy, path redundancy, and excessive wastage of bandwidth. This study analyzes the intelligence in cache content management to reduce the gross expenses incurred in ICN practice. The factors used in previous studies, recency and frequency of content usage, play delicate roles in our study. Replacement strategies are agreed to influence the overall cache-hit ratio, stretch, and network diversity.
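A minimal sketch of how recency and frequency can jointly drive a replacement decision; the scoring rule below is illustrative, not the paper's exact policy:

```python
class Cache:
    """Fixed-size content store that evicts by frequency first
    (LFU-style) and breaks ties by recency (LRU-style)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}   # content name -> (last_access_time, hit_count)

    def access(self, name, now):
        """Serve a request; return True on a cache hit, False on a miss."""
        if name in self.store:
            _, hits = self.store[name]
            self.store[name] = (now, hits + 1)
            return True
        if len(self.store) >= self.capacity:
            # Evict the least-frequently used entry; ties go to the
            # least-recently used one.
            victim = min(self.store,
                         key=lambda n: (self.store[n][1], self.store[n][0]))
            del self.store[victim]
        self.store[name] = (now, 1)
        return False
```

Weighting frequency versus recency differently is exactly the kind of knob such replacement studies vary.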

    Prospective use of bloom filter and muxing for information centric network caching

    Information dissemination, the main objective of setting up the Internet, has seen many efforts to improve its course. Information-Centric Networking (ICN) has been introduced with the aim of curtailing some of the challenges the traditional Internet will face in the near future. ICN's advantage of caching chunks of information on-path and off-path sets the paradigm apart as an alternative shift from host-centric to name-centric networking. The ICN caching approach can thereby significantly reduce the number of times a host is visited. The Bloom filter, with its advantage of fast searching and its false-positive characteristic, is seen as a form of message-retrieval practice to improve interest serving on the network. This paper analyzes the advantages of adopting Bloom filters and muxing as research directions to minimize excessive bandwidth consumption, reduce delays, deliver information promptly, raise throughput, and enable sharing information from troubled stations. Concepts are proposed and broader algorithms are pointed out to improve the overall ICN framework as it relates to caching and other network services.
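The Bloom filter trade-off mentioned above, fast membership tests at the cost of occasional false positives but never false negatives, can be sketched as follows (sizes and hash count are illustrative):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over content names: `might_contain` may
    return a false positive, but never a false negative."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0            # bit array packed into a Python int

    def _positions(self, name):
        # Derive k positions by salting the name with the hash index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{name}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, name):
        for p in self._positions(name):
            self.bits |= 1 << p

    def might_contain(self, name):
        return all(self.bits >> p & 1 for p in self._positions(name))
```

In an ICN setting, a router could advertise such a compact filter of its cached chunk names so that neighbours can route interests toward likely cache hits instead of revisiting the host.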

    A Fairness Investigation on Active Queue Management Schemes in Wireless Local Area Network

    Active Queue Management (AQM) is a scheme for handling network congestion before it happens, by deciding which packet to drop, when to drop it, and from which port to drop it when the queue has become or is becoming congested. AQM schemes such as Random Early Detection (RED), Random Early Marking (REM), Adaptive Virtual Queue (AVQ), and Controlled Delay (CoDel) have been proposed to maintain fairness when unresponsive constant-bit-rate UDP flows share a bottleneck link with responsive TCP traffic. However, the performance of these fair AQM schemes needs more investigation, especially evaluation in WLAN environments. This paper provides an experimental evaluation of different AQM schemes in a WLAN environment, in the presence of two types of flows (TCP and UDP), to study whether these schemes punish some flows unfairly. The simulations were conducted in Network Simulator 2 (ns-2) using a bottleneck topology. The results show that REM and AVQ both obtain higher fairness values than RED and CoDel; CoDel gives the lowest fairness, while RED gives a moderate fairness value in the WLAN environment. Hence, an AQM scheme should be chosen not only for its performance or its ability to detect congestion and recover from overflow, but also with consideration of fairness across different types of flows and of the environment, such as WLANs.
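The RED scheme evaluated above can be sketched in a few lines; the thresholds and EWMA weight below are illustrative defaults, not the settings used in the paper:

```python
import random

class RED:
    """Random Early Detection: the drop probability rises linearly
    between a minimum and a maximum threshold on the exponentially
    averaged queue length."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th, self.max_th = min_th, max_th
        self.max_p = max_p
        self.weight = weight     # EWMA weight for the average queue size
        self.avg = 0.0

    def should_drop(self, queue_len):
        # Update the moving average of the instantaneous queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False         # queue short: accept everything
        if self.avg >= self.max_th:
            return True          # queue long: drop everything
        # In between: drop with probability proportional to the excess.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```

Because the drop decision ignores which flow a packet belongs to, unresponsive UDP flows can crowd out TCP, which is precisely the fairness concern the paper investigates.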

    Proposed Algorithm for Scheduling in Computational Grid using Backfilling and Optimization Techniques

    In recent years, the fast evolution of computer hardware, such as processors, has led application developers to design advanced software that requires massive computational power. Grid computing has emerged to handle the computational power demanded by such applications. Quality of service (QoS) in the grid is highly required in order to provide a high service level to grid users. Several interacting factors determine the QoS level in the grid, such as allocating resources for jobs, monitoring the performance of the selected resources, and the computing capability of the available resources. A scheduling algorithm has to manage the allocation of suitable resources to incoming jobs. In this paper, we provide a critical review of recent scheduling mechanisms in the grid computing environment. In addition, we propose a new scheduling algorithm to minimize delay for the end user: a gap-filling (backfilling) policy is applied to improve the performance of the priority algorithm, and an optimization algorithm is then run to further enhance the initial result obtained from the backfilling mechanism. The main aim of the proposed scheduling mechanism is to improve the QoS for the end user in a real grid computing environment.
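The gap-filling idea can be sketched as a greedy pass over the priority-ordered queue; this is a toy single-resource version under assumed job tuples, not the paper's full mechanism (which adds a subsequent optimization phase):

```python
def backfill_schedule(jobs, free_slots):
    """Greedy gap filling: walk the priority-ordered job queue and start
    any job that fits into a currently free gap, without delaying the
    reservations behind the gaps.

    jobs: list of (job_id, runtime) in priority order.
    free_slots: list of gap lengths before the next reservations.
    Returns a list of (job_id, slot_index) placements.
    """
    placements = []
    remaining = list(free_slots)
    for job_id, runtime in jobs:
        for i, gap in enumerate(remaining):
            if runtime <= gap:
                placements.append((job_id, i))
                remaining[i] = gap - runtime   # shrink the used gap
                break
    return placements
```

Jobs that fit no gap simply wait for the next scheduling round, so high-priority reservations are never pushed back.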

    Cache-skip approach for information-centric network

    Several ICN cache deployment and management techniques have been using Web management techniques to manage information sharing and achieve a better cache-hit ratio. Leave Copy Down, Leave Copy Everywhere, and probabilistic cache management have gained the most attention. However, with Leave Copy Everywhere being the initial design specification in the ICN proposal, several content-manageability issues have posed a threat, particularly content and path redundancy. This paper presents an extensive simulation analysis of the popular cache management techniques, subjecting the concepts to different network topologies to investigate the prospect of extending them and proposing a new form of cache management in ICN known as Cache-skip. Cache-skip uses awareness of request time, network size, Time Since Birth (TSB), and Time Since Inception (TSI) to carefully choose the caching positions that benefit hit rates with less network stress, as a way to utilize bandwidth efficiently and enhance hits.
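The contrast with Leave Copy Everywhere can be illustrated with a toy position selector; the abstract does not give Cache-skip's exact rule, so the every-k-th-hop choice below is purely an assumption for illustration:

```python
def cache_skip_positions(path, skip=2):
    """Choose every `skip`-th router on the delivery path as a caching
    position, instead of caching the chunk at every hop as Leave Copy
    Everywhere does. Reduces redundant copies along the path.

    path: router ids from content source to requester.
    """
    return [node for i, node in enumerate(path) if i % skip == 0]
```

A real scheme would additionally weight each candidate position by the TSB/TSI freshness of any copy already there before deciding to cache.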

    Internet protocol MANET vs named data MANET: A critical evaluation

    Much research has been done in the field of mobile networking, specifically in ad-hoc networks. The major aim of these networks is the delivery of data to a given destination node, irrespective of its location. A Mobile Ad-hoc Network (MANET) employs the traditional TCP/IP structure to provide end-to-end communication between nodes (we name this type of architecture IP-MANET). However, due to node mobility and the limited resources of wireless networks, each layer in the TCP/IP model requires redefinition or modification to function efficiently in a MANET. The Named Data MANET (ND-MANET) architecture is a recently emerging research area; the in-network chunk-based caching feature of NDN is beneficial in coping with the mobility and intermittent-connectivity challenges in MANETs. Operating a MANET during a natural disaster is considered challenging because of unpredictable changes in the network topology and the absence of any centralized control. The goals of this paper are twofold: first, to provide a performance comparison of IP-MANET and ND-MANET in terms of throughput, delay, and packet loss; second, to identify which architecture copes better with a natural disaster (i.e., a flooding disaster) in rural areas and to suggest which one may perform better. For experimental purposes, we analyze IP-MANET and ND-MANET through extensive simulation in the NS-3 simulator under a number of different network scenarios, and show how the number of nodes and varied packet sizes affect their performance.

    Grid Federation: Number of Jobs and File Size Effects on Jobs Time

    Grid federation is fast emerging as an alternative solution to the problems posed by the large data-handling and computational needs of existing worldwide scientific projects. Efficient access to such extensively distributed data sets has become a fundamental challenge in grid computing. Creating and placing replicas at suitable sites using data replication mechanisms can increase the system's performance. Data replication reduces data access time, ensures load balancing, and narrows bandwidth consumption. In this paper, an enhanced data replication mechanism called EDR is proposed. EDR applies the principle of exponential growth/decay to both file size and file access history, based on the Latest Access Largest Weight (LALW) mechanism. The mechanism selects a popular file and determines an appropriate number of replicas as well as suitable grid sites for replication. It establishes the popularity of each file by associating a different weight with each historical data access record: a recent access record has a larger weight, signifying that it is more relevant to the current data access situation. By varying the number of jobs as well as the file sizes, the proposed EDR mechanism was simulated using file size and job completion time as the variable metrics. The OptorSim simulator was used to evaluate the proposed mechanism alongside the existing Least Recently Used (LRU) and Least Frequently Used (LFU) mechanisms. The simulation results showed that job completion time increases with growth in both file size and number of jobs, and that EDR improves the mean job completion time compared to the LRU and LFU mechanisms.
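The LALW-style decayed weighting that EDR builds on can be sketched as follows; the half-life parameter is an illustrative assumption, since the abstract does not give the exact decay factor:

```python
def popularity(access_times, now, half_life=10.0):
    """Score a file by exponentially decayed access history: an access
    made just now contributes ~1.0, one made `half_life` time units ago
    contributes 0.5, and older accesses decay toward 0."""
    return sum(0.5 ** ((now - t) / half_life) for t in access_times)

# A file accessed recently outscores one with the same number of
# much older accesses, so it is the better replication candidate.
recent = popularity([28.0, 29.0, 30.0], now=30.0)
stale = popularity([1.0, 2.0, 3.0], now=30.0)
```

A replication mechanism would rank files by such a score and replicate the top ones to the sites that generate most of their recent accesses.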

    Content caching in ICN using Bee-Colony optimization algorithm

    Information dissemination has recently been overtaken by the huge volume of media-driven data shared across different platforms. The future Internet will be greatly concerned with the pervasion and ubiquity of data on all devices. The Information-Centric Network appears to be the challenging paradigm that aims at guaranteeing the flexibility needed when the data explosion occurs, and caching is the option that provides the flexibility to manage data exchange practices. Various caching issues have raised concern about the content flooded all over the Internet. In line with these challenges, a Bee-Colony Optimization Algorithm (B-COA) is proposed in this paper to make content available on the Internet with lower referral cost and less monopolization of data on hosts. It is believed that the advantages of the grouping and waggle phases can be used to place contents faster in ICN.
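The grouping and waggle phases can be illustrated with a toy search loop; this is a generic bee-colony sketch under assumed names and scoring, not the B-COA variant the paper proposes:

```python
import random

def bee_colony_place(nodes, score, scouts=10, rounds=20, seed=0):
    """Toy bee-colony search for a good cache node.

    Scout bees sample random nodes (the grouping phase); in each round,
    recruited bees explore further candidates, and the colony shifts to
    any better-scoring node found (the waggle-dance recruitment).

    score: node -> float, higher is better (e.g. expected cache hits).
    """
    rng = random.Random(seed)
    # Grouping phase: scouts sample an initial set of candidate nodes.
    best = max(rng.sample(nodes, min(scouts, len(nodes))), key=score)
    for _ in range(rounds):
        candidate = rng.choice(nodes)
        if score(candidate) > score(best):
            best = candidate       # waggle dance: colony moves here
    return best
```

With a score that mixes referral cost and expected hit rate, the same loop would steer content toward nodes that relieve the origin hosts.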

    A survey on buffer and rate adaptation optimization in TCP-based streaming media studies

    Contrary to the popular conventional wisdom that the best transport protocol for streaming media is UDP, many studies have found that most of the transport used nowadays is TCP. Two main reasons UDP is not widely used are that it is not friendly to other flows and that some organizations block the protocol. TCP, in contrast, is inherently reliable and friendly to other flows. But with so many controls built into the protocol, such as congestion control, flow control, and a heavy acknowledgement mechanism, it introduces delays and jitter, and is thus naturally unfriendly to streaming media. Yet despite these inherited weaknesses, we have seen explosive growth of streaming media on the Internet. Given these contrasting premises, it is very interesting to study and investigate streaming media over the TCP transport protocol, specifically buffer and rate adaptation optimization.